51.
In this paper we present a new radiosity algorithm, based on the notion of a well distributed ray set (WDRS). A WDRS is a set of rays, connecting mutually visible points and patches, that forms an approximate representation of the radiosity operator and the radiosity distribution. We propose an algorithm that constructs an optimal WDRS for a given accuracy and mesh. The construction is based on discrete importance sampling, as in previously proposed stochastic radiosity algorithms, and on quasi-Monte Carlo sampling. Quasi-Monte Carlo sampling leads to faster convergence rates, and because the sampling is deterministic, the well distributed ray set can be represented very efficiently in computer memory. Like previously proposed stochastic radiosity algorithms, the new algorithm is well suited for computing the radiance distribution in very complex diffuse scenes, where it is not feasible to explicitly compute and store form factors as in classical radiosity algorithms. Experiments show that the new algorithm is often more efficient than previously proposed Monte Carlo radiosity algorithms by half an order of magnitude or more.
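A minimal sketch of the quasi-Monte Carlo ingredient described above, assuming a Halton low-discrepancy sequence and a deterministic, power-proportional allocation of rays to patches. The patch powers and the cosine-weighted direction mapping are illustrative assumptions, not the paper's exact WDRS construction.

```python
import math

def halton(i, base):
    """Radical inverse of integer i in the given base (Halton sequence)."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

# Hypothetical patch powers; rays are allocated deterministically in
# proportion to power, as in importance-driven stochastic radiosity.
powers = [5.0, 1.0, 0.5]
total = sum(powers)
n_rays = 16

# Deterministic allocation: patch j gets about n * P_j / P_total rays.
counts = [round(n_rays * p / total) for p in powers]

# Each ray direction comes from a 2D Halton point (bases 2 and 3),
# mapped to a cosine-weighted direction over the hemisphere.
ray_id = 0
for patch, count in enumerate(counts):
    for _ in range(count):
        u, v = halton(ray_id + 1, 2), halton(ray_id + 1, 3)
        theta = math.asin(math.sqrt(u))   # cosine-weighted elevation
        phi = 2.0 * math.pi * v
        print(f"patch {patch}: ray (theta={theta:.3f}, phi={phi:.3f})")
        ray_id += 1
```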
52.
Speed-up fractal image compression with a fuzzy classifier (cited: 4; self-citations: 0; citations by others: 0)
This paper presents a fractal image compression scheme incorporating a fuzzy classifier that is optimized by a genetic algorithm. Fractal image compression requires finding, among all possible divisions of an image into subblocks, a matching domain block for each range block. With suitable classification of the subblocks by a fuzzy classifier, the search time for this matching process can be reduced, speeding up the encoding process. Implementation results show that by introducing three image classes and using a fuzzy classifier optimized by a genetic algorithm, the encoding process can be sped up by about 40% relative to an unclassified encoding system.
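To make the classification idea concrete, here is a hedged Python sketch: blocks are bucketed into classes by simple statistics and the domain search is restricted to the matching bucket. The crisp thresholds, class names, and affine-fit error stand in for the paper's GA-tuned fuzzy classifier, which the abstract does not spell out.

```python
import numpy as np

def classify_block(block):
    """Toy stand-in for the fuzzy classifier: label a block smooth /
    edge / texture from its variance and gradient energy. (Crisp
    thresholds here; the paper tunes fuzzy rules with a GA.)"""
    var = block.var()
    grad = np.abs(np.diff(block, axis=0)).mean() + np.abs(np.diff(block, axis=1)).mean()
    if var < 25.0:
        return "smooth"
    return "edge" if grad > 8.0 else "texture"

def match_error(range_block, domain_block):
    """Least-squares affine fit r ~ s*d + o, standard in fractal coding.
    Domain blocks are assumed already downsampled to the range size."""
    d = domain_block.ravel().astype(float)
    r = range_block.ravel().astype(float)
    if d.std() < 1e-12:                      # flat block: fit offset only
        return float(((r - r.mean()) ** 2).mean())
    s, o = np.polyfit(d, r, 1)
    return float(((s * d + o - r) ** 2).mean())

def encode_block(range_block, domain_pool):
    """Search only the domain blocks sharing the range block's class."""
    candidates = domain_pool.get(classify_block(range_block), [])
    if not candidates:
        return None
    return min(range(len(candidates)),
               key=lambda i: match_error(range_block, candidates[i]))

# Toy usage with random 8x8 blocks.
rng = np.random.default_rng(0)
pool = {}
for _ in range(20):
    d = rng.integers(0, 256, (8, 8)).astype(float)
    pool.setdefault(classify_block(d), []).append(d)
r = rng.integers(0, 256, (8, 8)).astype(float)
print(encode_block(r, pool))
```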
53.
Enhanced compression of soliton pulses in the presence of the walk-off effect (cited: 1; self-citations: 0; citations by others: 1)
Under group-velocity mismatch, a new method is proposed to enhance soliton pulse compression. The study shows that introducing a suitable initial delay between the signal and pump pulses not only raises the compression ratio of the signal pulse when the walk-off effect acts alone, but also improves the compression quality of the soliton pulse.
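For orientation only, a standard coupled nonlinear Schrödinger model with a walk-off term is sketched below; the abstract does not give the paper's equations, so the normalization, the walk-off parameter, and the delayed launch condition are assumptions.

```latex
% Assumed textbook coupled-NLS model for pump (u_p) and signal (u_s)
% under group-velocity mismatch; \delta is the walk-off parameter.
% Not necessarily the paper's exact equations.
\begin{align}
  i\frac{\partial u_p}{\partial \xi}
    + \frac{1}{2}\frac{\partial^2 u_p}{\partial \tau^2}
    + \bigl(|u_p|^2 + 2|u_s|^2\bigr)u_p &= 0, \\
  i\frac{\partial u_s}{\partial \xi}
    + i\,\delta\,\frac{\partial u_s}{\partial \tau}
    + \frac{1}{2}\frac{\partial^2 u_s}{\partial \tau^2}
    + \bigl(|u_s|^2 + 2|u_p|^2\bigr)u_s &= 0.
\end{align}
% The proposed initial delay \tau_0 enters through the launch condition
% u_s(0,\tau) = \mathrm{sech}(\tau - \tau_0).
```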
54.
A system-on-chip (SOC) usually consists of many memory cores with different sizes and functionality, and they typically represent a significant portion of the SOC and therefore dominate its yield. Diagnostics for yield enhancement of the memory cores is thus a very important issue. In this paper we present two data compression techniques that can be used to speed up the transmission of diagnostic data from the embedded RAM built-in self-test (BIST) circuit that has diagnostic support to the external tester. The proposed syndrome-accumulation approach compresses the faulty-cell address and March syndrome to about 28% of the original size on average under the March-17N diagnostic test algorithm. The key component of the compressor is a novel syndrome-accumulation circuit, which can be realized by a content-addressable memory. Experimental results show that the area overhead is about 0.9% for a 1Mb SRAM with 164 faults. A tree-based compression technique for word-oriented memories is also presented. By using a simplified Huffman coding scheme and partitioning each 256-bit Hamming syndrome into fixed-size symbols, the average compression ratio (size of original data to that of compressed data) is about 10, assuming 16-bit symbols. Also, the additional hardware to implement the tree-based compressor is very small. The proposed compression techniques effectively reduce the memory diagnosis time as well as the tester storage requirement.
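The core idea of the tree-based scheme, partitioning a 256-bit syndrome into 16-bit symbols and Huffman-coding them, can be sketched as follows; the Huffman construction and the toy syndrome are illustrative, not the paper's simplified hardware variant.

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a Huffman code for the observed 16-bit symbols."""
    heap = [(n, i, (s,)) for i, (s, n) in enumerate(Counter(symbols).items())]
    heapq.heapify(heap)
    code = {s: "" for s in Counter(symbols)}
    while len(heap) > 1:
        n1, _, g1 = heapq.heappop(heap)      # two rarest groups
        n2, i, g2 = heapq.heappop(heap)
        for s in g1:
            code[s] = "0" + code[s]          # prepend branch bits
        for s in g2:
            code[s] = "1" + code[s]
        heapq.heappush(heap, (n1 + n2, i, g1 + g2))
    return code

# Partition a 256-bit syndrome into 16-bit symbols, then Huffman-code
# them; a mostly fault-free syndrome compresses well.
syndrome = "0" * 240 + "1010101010101010"
symbols = [syndrome[i:i + 16] for i in range(0, 256, 16)]
code = huffman_code(symbols)
compressed = "".join(code[s] for s in symbols)
print(len(syndrome), "->", len(compressed), "bits")
```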
55.
This paper studies the impact tensile behavior of unidirectional T300 carbon-fiber-reinforced epoxy composites at strain rates from 10^-3/s to 10^3/s. Fitting the experimental data shows that the material is only weakly rate-sensitive over this range: the failure strength and failure strain do not change significantly with strain rate, and the average modulus is almost unaffected. The geometric size effect of the specimens is analyzed, and the influence of stress-wave action on the failure mode, as well as the fiber pull-out observed in the experiments, is discussed. Given certain indeterminacies in the material behavior at strain rates around 10^2 to 10^3/s, the need to characterize its properties over a wider range is pointed out.
56.
A real-time statistical feature extraction method based on self-organizing neural networks (cited: 3; self-citations: 0; citations by others: 3)
The principles of statistical feature extraction are discussed using artificial neural networks, and a corresponding extraction method is proposed. The method offers real-time operation, fault tolerance, a high data compression ratio, and good feature resolution. Computer simulation and a case-study analysis of bridge cable tension states confirm that the technique has broad application value.
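The abstract names a self-organizing neural network without further detail, so the sketch below assumes a minimal 1-D self-organizing map (SOM) whose trained unit weights serve as a compressed set of statistical features for the input stream.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, n_units=8, epochs=50, lr0=0.5, sigma0=2.0):
    """Minimal 1-D self-organizing map. The trained unit weights act as
    compressed statistical 'features' of the data. (Illustrative; the
    paper's exact network and training rule are not given.)"""
    w = rng.normal(size=(n_units, data.shape[1]))
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                  # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 0.5      # shrinking neighborhood
        for x in rng.permutation(data):
            bmu = np.argmin(((w - x) ** 2).sum(axis=1))   # best-matching unit
            h = np.exp(-((np.arange(n_units) - bmu) ** 2) / (2 * sigma ** 2))
            w += lr * h[:, None] * (x - w)           # pull neighbors toward x
    return w

# Toy input: noisy measurements clustered around two states.
data = np.vstack([rng.normal(0, 0.1, (100, 2)), rng.normal(1, 0.1, (100, 2))])
features = train_som(data)
print(features.round(2))
```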
57.
Since Samuel's work on checkers over thirty years ago, much effort has been devoted to learning evaluation functions. However, all such methods are sensitive to the feature set chosen to represent the examples. If the features do not capture aspects of the examples significant for problem solving, the learned evaluation function may be inaccurate or inconsistent. Typically, good feature sets are carefully handcrafted and a great deal of time and effort goes into refining and tuning them. This paper presents an automatic knowledge-based method for generating features for evaluation functions. The feature set is developed iteratively: features are generated, then evaluated, and this information is used to develop new features in turn. Both the contribution of a feature and its computational expense are considered in determining whether and how to develop it further.
This method has been applied to two problem-solving domains: the Othello board game and the domain of telecommunications network management. Empirical results show that the method is able to generate many known features and several novel features and to improve concept accuracy in both domains.
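A hedged sketch of the generate-evaluate-refine loop the abstract describes. The contribution measure and the combine operator are placeholders, and the accounting for a feature's computational expense is omitted; the paper's knowledge-based generators are domain-specific.

```python
import random

random.seed(1)

def contribution(feature, examples):
    """Placeholder evaluation: how often the feature's sign agrees with
    the label on scored examples."""
    return sum((feature(x) > 0) == label for x, label in examples) / len(examples)

def iterate_features(seeds, combine, examples, rounds=3, keep=6):
    """Iterative feature development: rank candidates, keep the
    strongest, and derive new candidates from the survivors."""
    pool = list(seeds)
    for _ in range(rounds):
        pool.sort(key=lambda f: contribution(f, examples), reverse=True)
        survivors = pool[:keep]
        pool = survivors + [combine(a, b) for a in survivors[:3] for b in survivors[:3]]
    return pool[:keep]

# Toy usage: scalar 'positions' labeled by sign.
examples = [(v, v > 0) for v in (random.uniform(-1, 1) for _ in range(50))]
seeds = [lambda x: x, lambda x: -x, lambda x: x * abs(x)]
best = iterate_features(seeds, lambda a, b: (lambda x: a(x) + b(x)), examples)
print(contribution(best[0], examples))
```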
58.
洪庆月  周志林 《电子器件》1996,19(3):210-214
This paper describes two ECG data compression methods implemented on a computer and details the effect of each method on the compression ratio and distortion of ECG compression.
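The abstract does not name its two methods, so as a representative of this algorithm class here is a sketch of the classic turning-point (TP) ECG compressor, which trades a fixed 2:1 compression ratio for bounded distortion.

```python
def turning_point_compress(signal):
    """Classic turning-point (TP) ECG compressor: keep one of every two
    samples, preferring the one that preserves local slope changes.
    (Shown only as an example; not necessarily one of the paper's two
    methods.)"""
    out = [signal[0]]
    x0 = signal[0]
    i = 1
    while i + 1 < len(signal):
        x1, x2 = signal[i], signal[i + 1]
        if (x1 - x0) * (x2 - x1) < 0:   # x1 is a turning point: keep it
            out.append(x1)
        else:                            # otherwise keep the later sample
            out.append(x2)
        x0 = out[-1]
        i += 2
    return out

ecg = [0, 2, 5, 9, 6, 2, 1, 1, 2, 4, 3, 1]
print(turning_point_compress(ecg))   # about half the samples retained
```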
59.
Feature selection is one of the key steps in text classification, and the quality of the selected feature subset directly affects the classification result. This paper first defines two measures of a feature's discriminative power: one is the feature's scatter across documents of different classes, where larger scatter is better; the other is the feature's concentration within documents of the same class, where greater concentration is better. These two measures are then combined to design a new feature selection method that weighs both factors, so that the resulting feature set is more representative. Simulation experiments show that the proposed method can improve text classification performance to some extent.
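A hedged sketch of the combined criterion: inter-class scatter (larger is better) multiplied by intra-class concentration (larger is better). The abstract gives only the two notions, so the variance-based formulas below are illustrative assumptions.

```python
import numpy as np

def feature_scores(X, y):
    """Combined score = inter-class scatter x intra-class concentration.
    X: (docs, terms) term-frequency matrix; y: class labels.
    (The paper's exact formulas are not given in the abstract.)"""
    classes = np.unique(y)
    class_means = np.stack([X[y == c].mean(axis=0) for c in classes])
    scatter = class_means.var(axis=0)                 # spread between classes
    within = np.stack([X[y == c].var(axis=0) for c in classes]).mean(axis=0)
    concentration = 1.0 / (within + 1e-9)             # tightness within classes
    return scatter * concentration

# Toy data: term 0 separates the classes, term 1 is noise.
X = np.array([[5, 2], [6, 1], [0, 2], [1, 3]], dtype=float)
y = np.array([0, 0, 1, 1])
print(feature_scores(X, y))   # term 0 should score far higher
```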
60.
The biorthogonal transform is a block-based, low-complexity transform that, compared with the traditional discrete cosine transform, reduces blocking artifacts in the transformed image to some extent; it was therefore adopted in JPEG XR, the latest still-image coding standard from the Joint Photographic Experts Group (JPEG). To remedy JPEG XR's inability to control bitstream length, this paper studies its coding techniques in depth and proposes an encoding algorithm targeting a fixed compression ratio. The main idea is to apply embedded bit-plane coding to the biorthogonally transformed coefficients in place of the original quantization step, so that the length of the compressed bitstream can be controlled exactly.
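A toy sketch of the rate-control idea: emit bit-planes from most to least significant and cut the stream at exactly the target number of bits. JPEG XR's actual coefficient scanning and entropy coding are far more elaborate; the layout here is illustrative only.

```python
import numpy as np

def embedded_bitplane_encode(coeffs, target_bits):
    """Embedded bit-plane coding with exact rate control: sign plane
    first, then magnitude planes MSB to LSB, truncated at target_bits.
    (Assumes target_bits is at least the number of coefficients.)"""
    mags = np.abs(coeffs).astype(int)
    planes = int(mags.max()).bit_length()
    bits = ["1" if c < 0 else "0" for c in coeffs]        # sign plane
    for p in range(planes - 1, -1, -1):                   # MSB -> LSB
        bits.extend(str((m >> p) & 1) for m in mags)
    return "".join(bits)[:target_bits], planes

def embedded_bitplane_decode(stream, n, planes):
    """Reconstruct from whatever planes survived the truncation."""
    signs = np.where(np.array(list(stream[:n])) == "1", -1, 1)
    mags = np.zeros(n, dtype=int)
    pos = n
    for p in range(planes - 1, -1, -1):
        chunk = stream[pos:pos + n]
        for i, b in enumerate(chunk):                     # partial plane is fine
            mags[i] |= int(b) << p
        pos += n
    return signs * mags

coeffs = np.array([37, -21, 6, -3, 1, 0, 0, 0])
stream, planes = embedded_bitplane_encode(coeffs, target_bits=30)
print(len(stream), embedded_bitplane_decode(stream, len(coeffs), planes))
```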